Commentary: The State of Political Polling in America

by Carl M. Cannon


Two years ago, 14 public opinion polls queried Maine voters about their preference in the campaign between incumbent Republican Sen. Susan Collins and Democratic challenger Sara Gideon. Every one of those surveys had Gideon ahead, by margins ranging from 1 percentage point (Bangor Daily News) to 12 points (Quinnipiac). When the election returns rolled in, however, none of them were close. Collins prevailed handily, beating Gideon by 8.6%.

This was an egregious example, but not an isolated case. Moreover, the pollsters’ recent errancy has had a conspicuous bent: in both 2016 and 2020, they consistently underestimated the Republican vote. And as polling goes through a rough patch, one of the accusations being leveled against the industry – particularly regarding surveys done by university-based pollsters or legacy media organizations – is a partisan tilt.

Besides blowing it in Maine, the pre-election presidential polls were misleading in nearly every swing state in 2020, as they were in 2016, most spectacularly in Wisconsin. Polls were off-kilter in the national popular vote as well. Yes, Joe Biden bested Donald Trump by 7 million votes (51.3% to 46.9%), but his winning margin was half the size suggested by the final numbers in several prominent polls. In the final week of the campaign, Quinnipiac, Economist/YouGov, CNBC, and NBC/Wall Street Journal all had Biden winning by double digits. Harvard/Harris, Fox News, and Survey USA had Biden’s lead at 8%. But when the votes were counted, the Democratic ticket won by 4.4%.

“The polls were a stinking pile of hot garbage,” Sean Trende, senior elections analyst for RealClearPolitics, tweeted the following week. The Atlantic magazine pronounced 2020 “a disaster for the polling industry.” Few pollsters disagreed.

“It is distressing,” Steve Koczela, president of The MassINC Polling Group, told WBUR, Boston’s NPR affiliate, at the time. Koczela, who has done election polling for WBUR himself, added, “It’s a bummer to wake up after the 2020 election and see again that we’re in a similar place as we were in 2016 as a profession.”

Despite industry-wide soul-searching in two consecutive cycles, the 2021 off-year gubernatorial elections in Virginia and New Jersey were not entirely encouraging. Although the RealClearPolitics poll average did have the eventual winners in both states, some well-known polls were again wide of the mark.

In Virginia, where Republican first-time candidate Glenn Youngkin would defeat former governor Terry McAuliffe by 1.9%, the Trafalgar Group and a Washington, D.C., Fox News affiliate poll by Insider Advantage hit it nearly on the nose – both had the margin at 2%. The Washington Post and USA Today/Suffolk polls both had the winning margin at 1%, but for the losing candidate. No disgrace there; it was a tight election. But pollsters want to get the winner right. They also like to be close to the final margin, which was not the case in New Jersey.

The final Monmouth University poll had New Jersey Gov. Phil Murphy 11% over GOP challenger Jack Ciattarelli – in a race eventually decided by just under 3 points. Monmouth wasn’t alone: Fairleigh Dickinson and Stockton University both had the race at 9% in their final polls, while Rutgers-Eagleton had it at 8%.

Patrick Murray, director of the Monmouth University Polling Institute, was candid afterward. “I blew it,” he wrote in a Newark Star-Ledger mea culpa. The Monmouth poll, he added, “did not provide an accurate picture of the state of the governor’s race.”

So why are pollsters getting things so wrong? And which polls – and which races – should Americans keep an eye on in the final week of the 2022 midterms?

Let’s take the first question first. American University professor W. Joseph Campbell invokes Leo Tolstoy’s famous observation about families in “Anna Karenina”: “All happy families are alike; each unhappy family is unhappy in its own way.” In other words, there are many ways to make polling mistakes. Here are seven:

GIGO: In computer science, the acronym stands for “garbage in, garbage out,” a principle that organizations that average polls (RealClearPolitics and 538 are the most prominent) have been contemplating lately. But the same principle applies to individual polls themselves. A basic problem when trying to extrapolate the voting preferences of millions of people based on a sample of some 600 is that the polling sample can turn out to be unrepresentative of the electorate. Too many Democrats, for example, or too few. The challenge has grown exponentially because of modern technology. Cord-cutters, mobile phones, call screening devices, an explosion in the number of polls – all these factors and more have helped produce very low response rates.
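To put that sampling challenge in perspective, the textbook arithmetic behind a roughly 600-person sample is simple enough to do on the back of an envelope. Here is a minimal sketch in Python (the figures are illustrative, not drawn from any particular poll):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 600 carries roughly a +/- 4-point margin of error at 95% confidence.
print(f"{margin_of_error(600) * 100:.1f} points")  # ~4.0
```

That cushion is the best case: the formula assumes a truly random, representative sample, which is precisely what very low response rates make so hard to obtain.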

Pollsters themselves were the first to recognize the problem, but it’s proven a tricky one to fix. Steve Koczela noted after the 2020 shortcomings that pollsters thought they’d addressed the big problem of 2016, which was underestimating – and therefore undercounting – Trump’s appeal among white voters without college degrees. Yet it happened again four years later.

“We all made our best attempt to solve all of the methodological issues that we identified after 2016,” he said. They apparently didn’t weight their samples enough. Meanwhile, new problems arose, including not detecting Trumpism’s ability to pump up turnout among working-class Hispanics.

Whiffing on Voter Turnout: In a piece earlier this week, The New York Times political writer Blake Hounshell put the issue simply: “It’s the biggest mystery of the midterms: Which groups of voters will turn out in the largest numbers?”

The “generic congressional ballot” favors Republicans at this point, but will those numbers hold on Election Day? Another way to ask that question is this: Will voters angered by the Supreme Court’s reversal of Roe v. Wade and alarmed by Republican “election denial” prove more highly motivated than those suffering from inflation and alarmed by rising crime? Pollsters have various ways of detecting enthusiasm gaps, none of which are foolproof. Since 1950, Gallup has explored ways of finding out. It was this effort that originally produced the distinction between “registered voters” and “likely voters.”
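In practice, a likely-voter screen is just a scoring rule applied to survey answers. The sketch below is a hypothetical illustration of the idea; the questions and cutoff are invented here and are not Gallup’s actual model:

```python
def likely_voter_score(respondent):
    """Toy likely-voter screen: count affirmative answers to turnout questions.
    The questions and threshold are hypothetical, for illustration only."""
    signals = [
        respondent.get("voted_in_last_election", False),
        respondent.get("knows_polling_place", False),
        respondent.get("says_certain_to_vote", False),
        respondent.get("follows_campaign_closely", False),
    ]
    return sum(signals)

def is_likely_voter(respondent, cutoff=3):
    """A registered voter who clears the cutoff counts in the likely-voter sample."""
    return likely_voter_score(respondent) >= cutoff

print(is_likely_voter({"voted_in_last_election": True,
                       "knows_polling_place": True,
                       "says_certain_to_vote": True}))  # True
```

Where pollsters differ is in which questions they ask and where they set the cutoff – which is why two polls of the same electorate can report different “likely voter” numbers.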

Measuring the relative level of voter enthusiasm in each political party does contain subjective components. But factors such as the public’s view of the economy, its view of the incumbent president’s competence, and whether the country is on the right track or wrong track almost always provide a window into what is motivating the electorate — and by how much and on which side of the aisle. On Tuesday, a week before the election, Gallup released an extensive survey trying to get at the very question posed in the Times. Gallup’s answer? Republicans “have the wind at their backs.”

The Quinnipiac University poll asks respondents directly: How motivated are you to vote? The answer in its latest poll was unambiguous: “Republicans Have Edge in Enthusiasm to Vote.”

Failing To Pin Down Undecideds: Another way pollsters can miss what’s happening is by failing to notice how swing voters in battleground states or districts are breaking. This is partly what happened to pollsters in New Jersey last year: They stopped polling too early, just as late-deciding voters were moving toward the underfunded challenger.

In 2016, YouGov pollsters were perplexed by those saying they still hadn’t made up their minds only days before the election. “So we pressed the holdouts,” said Stanford University professor David W. Brady. They were asked, “If you had to vote for either Hillary Clinton or Donald Trump, who would you vote for?” It didn’t really help, and in any event, this was a national poll that didn’t detect late movement toward Trump in several key swing states.

Candidates’ Own Late-Inning Errors: Sometimes, Election Day surprises aren’t solely the pollsters’ fault – or the fault of the polls’ respondents. Candidates themselves can screw up the data. In North Carolina’s 2020 Senate race, for example, all five surveys of the contest between Democrat Cal Cunningham and Republican Thom Tillis showed Cunningham ahead. Then came not one, but two, October surprises.

First, Tillis revealed shortly after the two candidates debated that he had COVID, leading to questions about when he knew he was contagious, and concerns he may have given the virus to Cunningham. Then Cunningham revealed he’d been involved in an extramarital affair during the campaign. He made a vague statement of contrition, refused to say whether this was the first such transgression, and remained cloistered during the last three weeks of the campaign. Perhaps it would have happened anyway, but Cunningham ended up losing by nearly two points.

In 2022, two pivotal Senate races have been buffeted by late-breaking developments. In Georgia, conservative, pro-life Republican challenger Herschel Walker has been hit by a second accusation that he paid for an abortion. (He’s denied both.) Meanwhile, Pennsylvania Democrat John Fetterman insisted on a single debate against Republican Mehmet Oz, and one late in the game: not until Oct. 25. The night of the debate, it became clear why: Fetterman is still dealing with serious health effects from a stroke he suffered in May.

Will either of these two developments matter, or does rank partisanship outweigh such revelations? Measuring the possible fallout is complicated by the fact that Pennsylvania and Georgia have seen record levels of early voting this year.

Missing the “Trump Effect”: Donald J. Trump barged onto the U.S. political scene in 2015 with unparalleled bluster. He questioned Barack Obama’s citizenship, vowed to build a “big beautiful wall” to keep immigrants out, and questioned the integrity of a U.S. district court judge because he was “a Mexican.” Actually, Judge Gonzalo Curiel was born in Indiana. His parents were Mexican immigrants. When questioned on his bizarre logic, Trump doubled down. “I’m building a wall,” he explained. “It’s an inherent conflict of interest.”

And so it went (and continues to this day). Trump routinely kicked established norms in the groin – and the establishment responded in kind. It was an integral part of Trump’s persona, but also his strategy for rising above a large pack of GOP presidential contenders. And it worked. He shot to the top of the polls and stayed there. And at first, the polling was better than the political analysis. Neither the media, which gleefully returned Trump’s declaration of war, nor the Republican Party establishment, could imagine this guy as president. So the cable networks gave him unlimited free airtime during the primary season. His party rivals mostly ignored him until it was too late. A pro-Jeb Bush political action committee spent $100 million attacking Bush’s former protégé Marco Rubio. Gov. Chris Christie launched his own kamikaze attacks on Rubio.

Meanwhile, prominent political analysts – even those who pride themselves on their acuity at interpreting the data – simply dismissed the possibility of a Trump presidential nomination. The guy hadn’t really been a Republican, let alone a conservative. But the data painted a different picture. At the outset of the 2016 primary season, he was polling well in all the early primary states. Before the Iowa caucuses, Trump was ahead in the polls there – and in the first two primary states, New Hampshire and South Carolina.

The pundits and analysts were going with their hearts, not their heads. Think of it this way: How was finishing well in Iowa (he ran second, behind Ted Cruz and just ahead of Rubio) going to lessen his appeal in New Hampshire? The answer was obvious. Trump swamped the field in New Hampshire, then beat Cruz and Rubio by 10 percentage points in South Carolina. By the time Trump was taken seriously, he had a lead he would never relinquish. This pattern would be repeated in the autumn, albeit with a new wrinkle: In the general election, Trump outperformed his polling numbers. But why?

Confusing Alchemy With Science: Wisconsin has been a case study for bad polling in the Trump era. There were problems in all the other battleground states, but Wisconsin was Ground Zero.

On Nov. 2, 2016, Marquette University poll director Charles Franklin walked journalists through his final numbers, which gave Hillary Clinton a 6-point lead over Trump, 46%-40%. That left a telling number of undecideds, which should have given one pause, but the Marquette poll was considered the “gold standard” in Wisconsin.

Even the Trump camp heeded its warning: The Republican nominee canceled a planned Wisconsin rally and stumped that day in Minnesota instead. The voters had other ideas, however. Trump narrowly carried Wisconsin against Clinton. Afterward, a somber Charles Franklin told journalist David Weigel, “No one will ever say the Marquette poll is ‘never wrong’ again. We’ve now been wrong. It’s that simple.”

The problem turned out to be not simple at all. For one thing, Marquette wasn’t alone: All four independent polls done in Wisconsin in 2016, including one by a Republican firm, Remington Research, showed Clinton comfortably ahead. Remington actually had her lead at 8 points.

Despite this fiasco – and vows to address their shortcomings – four years later the polling in Wisconsin was just as bad, and in one spectacular example, much worse.

Although the polls had the winner right in 2020 (Biden carried Wisconsin by less than a point, roughly the same margin by which Trump had won it in 2016), only Trafalgar nailed the result. All the other polls had Biden winning easily: from 3% (Susquehanna) to 17% (Washington Post/ABC News). The Post poll was an outlier, but several other polls were way off, too. Emerson and CNBC each had Biden’s lead at 8 points; Reuters/Ipsos put it at 10, and New York Times/Siena had Trump losing by 11 points.

That said, some pollsters used the 2016 and 2020 campaigns to make a name for themselves – and a reputation for accuracy. Although no one applied the nickname to the Iowa Poll, done for the Des Moines Register by Selzer & Co., it’s not a stretch to say it became the “gold standard” in the Hawkeye State. There, pollster J. Ann Selzer was the first to notice Trump breaking away from Clinton in 2016 and was the closest in the final tally. She had Trump ahead by 9%; he would win by 9.5%. No other Iowa poll was in the ballpark.

Four years later, Selzer detected a similar trend. Her track record in 2016, when the Iowa Poll was a lonely outlier challenging the conventional wisdom, should have been a bright red flag. The Des Moines Register poll had the race between Trump and Biden tied in September, but Selzer’s data showed Trump starting to pull away in October, just as he had in 2016. Here was a strong sign that 2020 was again going to be close in the Midwestern and Rust Belt battleground states. But this warning was largely ignored, even in Iowa. The other Iowa pollsters had the race tied, or within a percentage point or two, while Selzer’s last poll showed Trump over Biden 48%-41%. She was close: Trump won by just over 8%.

In interviews afterward, she credited her detailed knowledge of the state and a more creative methodology. Instead of only weighting her sample by educational attainment, as others did, Selzer added age, gender, and geography to the mix, taking care that her panel of respondents included representative numbers from each Iowa congressional district – as a way of accounting for today’s urban/rural divide.

“It is a science of estimation,” she told a reporter later. “The key word being estimate.”
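Weighting a sample along several dimensions at once is commonly done by “raking” (iterative proportional fitting). The sketch below shows the idea with invented categories and targets; it is not Selzer’s actual procedure:

```python
# Toy raking (iterative proportional fitting): adjust respondent weights so the
# weighted sample matches known population shares on several dimensions at once.
# The respondents, categories, and targets below are invented for illustration.
respondents = [
    {"educ": "college", "region": "urban"},
    {"educ": "college", "region": "rural"},
    {"educ": "no_college", "region": "urban"},
    {"educ": "no_college", "region": "rural"},
    {"educ": "no_college", "region": "rural"},
]
targets = {
    "educ": {"college": 0.30, "no_college": 0.70},
    "region": {"urban": 0.55, "rural": 0.45},
}
weights = [1.0] * len(respondents)

for _ in range(50):  # a few dozen passes are plenty for a toy example
    for dim, shares in targets.items():
        total = sum(weights)
        for category, share in shares.items():
            idx = [i for i, r in enumerate(respondents) if r[dim] == category]
            current = sum(weights[i] for i in idx) / total
            for i in idx:
                weights[i] *= share / current

# Each respondent now counts for slightly more or less than one person,
# so the sample's education and geography mix matches the targets.
print([round(w, 2) for w in weights])
```

The weighted sample matches the assumed mix of the electorate – provided, of course, that the targets themselves are right.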

Pollsters vs. Trump: After the 2016 debacle, pollsters postulated the existence of the “shy Trump voter.” Their theory was that Trump was so grotesque that his own voters were leery of admitting their intentions. There were a couple of things wrong with this explanation, other than its inherently partisan presupposition. First of all, Trump’s raucous supporters seemed anything but bashful. Secondly, why were there so many thousands of “shy” voters in Wisconsin and so few in neighboring (and demographically similar) Minnesota?

At some point, however, it became a self-fulfilling prophecy: A statistically significant number of Trump voters began eschewing pollsters. After Election Day 2020, Charles Franklin addressed this phenomenon in a virtual meeting with the Milwaukee Press Club. “If a small segment of Trump voters are systematically declining to participate in the poll, that can account for why we are systematically understating Trump’s vote,” he said.

To some Trump supporters this smacked of blaming the victim. But if this was a guerrilla tactic by a subset of MAGA populists, they received tacit approval from their hero. Always quick to seize on a conspiracy, Trump denounced the wildly flawed 2020 Wisconsin polls as an attempt at “voter suppression.” Other Republicans lashed out, too. “This is garbage!! Fake News,” tweeted former GOP Congressman Sean Duffy.

But crappy polls have more obvious effects than “suppressing” the vote. Campaigns have to allocate resources and their candidates’ time. Hillary Clinton is widely criticized by her fellow Democrats for not campaigning in Michigan in 2016. But she didn’t think she had to: All those polls except for Trafalgar had her safely ahead in Michigan.

Four years later, despite saying privately that its internal polling showed the race closer than the public polls, Biden’s camp tried to run the table in places such as Ohio and Iowa. Although the public polls showed those states as being close, they were not: Biden lost both by roughly 8 points. “We were operating in a reality that wasn’t reality, and we were operating off numbers that just were clearly not reflecting what turnout would be,” one Democratic consultant told the Washington Post afterward.

*           *           *

So where does all that leave us on the eve of a highly emotional and hotly contested midterm election that will determine control of Congress? In theory, Trump’s absence from the ballot might ameliorate some of these problems. Or maybe not, especially if avoiding pollsters has become part of the MAGA playbook. And here’s a more loaded question: What if some of these polls are bad because they reflect the political views (or hopes) of the pollsters and media organizations doing them?

Attributing political bias to a pollster is a serious accusation, especially when there are so many other variables. The Digital Age has made the methodology of scientific polling exceedingly complex, mainly because the electorate is so elusive. Yet some obvious trends have developed during the 2022 campaign.

Polling experts and political analysts such as Joseph Campbell, Sean Trende, and Nate Silver have taken notice that in key 2022 battleground Senate and gubernatorial races, the Siena College/New York Times poll consistently shows Democratic candidates running stronger than the polling average, while the opposite is true of the Trafalgar Group.

Nate Silver’s organization, 538, currently gives pollsters letter grades for their performances. It awards the Siena/NYT poll an A-plus, while assigning Trafalgar a grade of A-minus. But next Tuesday is another big exam. We shall see which organization was closer to the mark, even as we learn which political party controls Congress. Meanwhile, RealClearPolitics, which depends on accurate polling for the integrity of its averages, is launching its own “Polling Accountability Initiative,” with the stated goal of being a “positive and constructive force” for improved political and media polling in this country.

Between now and next Tuesday, however, the 2020 admonition of Marquette University pollster Charles Franklin remains salient. “Polls don’t vote,” he tells the public. “The election is in your hands.”

– – –

Carl M. Cannon is the Washington bureau chief for RealClearPolitics and executive editor of RealClearMedia Group. Reach him on Twitter @CarlCannon.
Photo “Pollsters” by Edmond Dantès.


Content created by RealClearWire is available without charge to any eligible news publisher. For republishing terms, please contact [email protected].
